
    Superpolynomial lower bounds for general homogeneous depth 4 arithmetic circuits

    In this paper, we prove superpolynomial lower bounds for the class of homogeneous depth 4 arithmetic circuits. We give an explicit polynomial in VNP of degree $n$ in $n^2$ variables such that any homogeneous depth 4 arithmetic circuit computing it must have size $n^{\Omega(\log \log n)}$. Our results extend the works of Nisan-Wigderson [NW95] (which showed superpolynomial lower bounds for homogeneous depth 3 circuits), Gupta-Kamath-Kayal-Saptharishi and Kayal-Saha-Saptharishi [GKKS13, KSS13] (which showed superpolynomial lower bounds for homogeneous depth 4 circuits with bounded bottom fan-in), Kumar-Saraf [KS13a] (which showed superpolynomial lower bounds for homogeneous depth 4 circuits with bounded top fan-in) and Raz-Yehudayoff and Fournier-Limaye-Malod-Srinivasan [RY08, FLMS13] (which showed superpolynomial lower bounds for multilinear depth 4 circuits). Several of these results in fact showed exponential lower bounds. The main ingredient in our proof is a new complexity measure of bounded-support shifted partial derivatives. This measure allows us to prove exponential lower bounds for homogeneous depth 4 circuits where all the monomials computed at the bottom layer have bounded support (but possibly unbounded degree/fan-in), strengthening the results of Gupta et al. and Kayal et al. [GKKS13, KSS13]. This new lower bound, combined with a careful "random restriction" procedure (that transforms general homogeneous depth 4 circuits to depth 4 circuits with bounded support), gives us our final result.
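
    As a concrete illustration of the measure family involved, the sketch below computes the plain shifted partial derivative dimension of [GKKS13, KSS13] for a toy polynomial, i.e. the dimension of the span of $\{x^{\alpha} \cdot g : g$ an order-$k$ partial of $f$, $\deg(x^{\alpha}) \le \ell\}$. The paper's bounded-support variant further restricts which monomials may appear, which is not modelled here; the function names and the example polynomial are illustrative, not from the paper.

```python
from itertools import combinations_with_replacement
import sympy as sp

def shifted_partials_dim(f, xs, k, ell):
    # all order-k partial derivatives of f (one per multiset of k variables)
    derivs = {sp.expand(sp.diff(f, *S))
              for S in combinations_with_replacement(xs, k)}
    # all shift monomials x^alpha with total degree <= ell (1 = empty shift)
    shifts = [sp.Integer(1)]
    for d in range(1, ell + 1):
        shifts += [sp.Mul(*m) for m in combinations_with_replacement(xs, d)]
    shifted = [sp.Poly(s * g, *xs) for s in shifts for g in derivs if g != 0]
    # dimension = rank of the coefficient matrix in the monomial basis
    basis = sorted({m for p in shifted for m in p.as_dict()})
    M = sp.Matrix([[p.as_dict().get(m, 0) for m in basis] for p in shifted])
    return M.rank()

# toy example: the measure for f = x1*x2*x3 + x1^3 with k = ell = 1
x1, x2, x3 = sp.symbols('x1 x2 x3')
print(shifted_partials_dim(x1*x2*x3 + x1**3, (x1, x2, x3), k=1, ell=1))
```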

    On Tractable Exponential Sums

    We consider the problem of evaluating certain exponential sums. These sums take the form $\sum_{x_1,\dots,x_n \in Z_N} e^{2 \pi i f(x_1,\dots,x_n)/N}$, where each $x_i$ is summed over the ring $Z_N$ and $f(x_1,\dots,x_n)$ is a multivariate polynomial with integer coefficients. We show that the sum can be evaluated in time polynomial in $n$ and $\log N$ when $f$ is a quadratic polynomial. This is true even when the factorization of $N$ is unknown. Previously, this was known for a prime modulus $N$. On the other hand, for very specific families of polynomials of degree $\ge 3$, we show the problem is #P-hard, even for any fixed prime or prime-power modulus. This leads to a complexity dichotomy theorem - a complete classification of each problem as either computable in polynomial time or #P-hard - for a class of exponential sums. These sums arise in the classifications of graph homomorphisms and some other counting CSP-type problems, and these results lead to complexity dichotomy theorems there as well. For the polynomial-time algorithm, Gauss sums form the basic building blocks. For the hardness results, we prove group-theoretic necessary conditions for tractability. These tests imply that the problem is #P-hard even for very restricted families of simple cubic polynomials over a fixed modulus $N$.
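
    To make the object concrete, here is a brute-force evaluator for the sum above; the paper's point is that for quadratic $f$ the same value is computable in time polynomial in $n$ and $\log N$ via Gauss sums, whereas this sketch enumerates all $N^n$ points. The function name and example polynomial are illustrative.

```python
from itertools import product
from cmath import exp, pi

def exp_sum(f, n, N):
    # sum over all (x_1, ..., x_n) in (Z_N)^n of e^{2*pi*i*f(x)/N}
    return sum(exp(2j * pi * (f(*x) % N) / N)
               for x in product(range(N), repeat=n))

# quadratic example: f(x1, x2) = x1^2 + 3*x1*x2 + x2 over Z_5
print(exp_sum(lambda a, b: a*a + 3*a*b + b, n=2, N=5))
```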

    Efficient classical simulation of slightly entangled quantum computations

    We present a scheme to efficiently simulate, with a classical computer, the dynamics of multipartite quantum systems on which the amount of entanglement (or of correlations, in the case of mixed-state dynamics) is conveniently restricted. The evolution of a pure state of n qubits can be simulated by using computational resources that grow linearly in n and exponentially in the entanglement. We show that a pure-state quantum computation can only yield an exponential speed-up with respect to classical computations if the entanglement increases with the size n of the computation, and we give a lower bound on the required growth.
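
    A minimal sketch of the data structure behind such simulations: a matrix product state (MPS) stores one tensor of shape (chi_left, 2, chi_right) per qubit, so memory is linear in n and exponential only in the bond dimension chi, which bounds the entanglement across each cut. This is a generic MPS illustration under those assumptions, not the paper's exact scheme (which uses a Gamma/lambda decomposition).

```python
import numpy as np

def apply_1q_gate(A, U):
    # contract a 2x2 gate into the physical index of one site tensor
    return np.einsum('ps,lsr->lpr', U, A)

def apply_2q_gate(A1, A2, U4, chi_max):
    # merge two neighbouring sites, apply the 4x4 gate, split back by SVD,
    # keeping at most chi_max singular values (the entanglement cut-off)
    theta = np.einsum('lsr,rtk->lstk', A1, A2)
    theta = np.einsum('pqst,lstk->lpqk', U4.reshape(2, 2, 2, 2), theta)
    l, _, _, k = theta.shape
    u, s, vh = np.linalg.svd(theta.reshape(l * 2, 2 * k), full_matrices=False)
    chi = min(chi_max, int(np.sum(s > 1e-12)))
    u, s, vh = u[:, :chi], s[:chi], vh[:chi, :]
    return u.reshape(l, 2, chi), (np.diag(s) @ vh).reshape(chi, 2, k)

# demo: |00> as an MPS (bond dimension 1), then Hadamard + CNOT -> Bell state
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]], float)
A = [np.array([[[1.0], [0.0]]]), np.array([[[1.0], [0.0]]])]
A[0] = apply_1q_gate(A[0], H)
A[0], A[1] = apply_2q_gate(A[0], A[1], CNOT, chi_max=2)
print(np.einsum('lsr,rtk->stk', A[0], A[1]).reshape(4))  # Bell amplitudes
```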

    Probabilistic Model Counting with Short XORs

    The idea of counting the number of satisfying truth assignments (models) of a formula by adding random parity constraints can be traced back to the seminal work of Valiant and Vazirani, showing that NP is as easy as detecting unique solutions. While theoretically sound, the random parity constraints in that construction have the following drawback: each constraint, on average, involves half of all variables. As a result, the branching factor associated with searching for models that also satisfy the parity constraints quickly gets out of hand. In this work we prove that one can work with much shorter parity constraints and still get rigorous mathematical guarantees, especially when the number of models is large so that many constraints need to be added. Our work is based on the realization that the essential feature for random systems of parity constraints to be useful in probabilistic model counting is that the geometry of their set of solutions resembles an error-correcting code.
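
    A toy illustration of the counting-by-hashing idea: each random XOR constraint cuts the model set roughly in half, so the number of constraints a formula survives before going unsatisfiable estimates log2 of its model count. Real implementations delegate the satisfiability checks to a SAT solver and use the paper's analysis to choose the constraint length k; this sketch brute-forces a tiny formula, and all names are illustrative.

```python
import random
from itertools import product

def log2_models_estimate(sat, n, k, trials=30):
    # add random length-k XOR constraints one by one; the count the
    # formula survives estimates log2 of the number of models
    ests = []
    for _ in range(trials):
        xors, m = [], 0
        while True:
            xors.append((random.sample(range(n), k), random.randint(0, 1)))
            if not any(sat(x) and all(sum(x[i] for i in vs) % 2 == b
                                      for vs, b in xors)
                       for x in product((0, 1), repeat=n)):
                break
            m += 1
        ests.append(m)
    return sorted(ests)[trials // 2]  # median over trials

# toy "formula": x0 OR x1 over n = 8 variables, so 3 * 2^6 = 192 models
print(log2_models_estimate(lambda x: x[0] or x[1], n=8, k=3))
```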

    Efficient solvability of Hamiltonians and limits on the power of some quantum computational models

    We consider quantum computational models defined via a Lie-algebraic theory. In these models, specified initial states are acted on by Lie-algebraic quantum gates, and the expectation values of Lie algebra elements are measured at the end. We show that these models can be efficiently simulated on a classical computer in time polynomial in the dimension of the algebra, regardless of the dimension of the Hilbert space where the algebra acts. Similar results hold for the computation of the expectation value of operators implemented by a gate sequence. We introduce a Lie-algebraic notion of generalized mean-field Hamiltonians and show that they are efficiently ("exactly") solvable by means of a Jacobi-like diagonalization method. Our results generalize earlier ones on fermionic linear optics computation and provide insight into the source of the power of the conventional model of quantum computation.
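
    A hedged sketch of the simulation idea in its simplest instance: when the Hamiltonian lies in a Lie algebra whose basis is closed under commutation, the Heisenberg equations for the basis expectation values close on themselves, giving a linear ODE of size dim(algebra) rather than the Hilbert-space dimension. The example below does this for su(2) (spin precession); the conventions and names are illustrative, not the paper's.

```python
import numpy as np
from scipy.linalg import expm

# su(2) in the Pauli basis: [sigma_a, sigma_b] = 2i eps_{abc} sigma_c
eps = np.zeros((3, 3, 3))
for a, b, c in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[a, b, c], eps[b, a, c] = 1.0, -1.0

def evolve_expectations(h, v0, t):
    # H = sum_a h_a sigma_a; Heisenberg gives the closed linear system
    # d v_k / dt = i <[H, sigma_k]> = -2 sum_{a,c} h_a eps_{akc} v_c
    A = -2.0 * np.einsum('a,akc->kc', h, eps)
    return expm(A * t) @ v0

omega = 1.0
h = np.array([0.0, 0.0, omega / 2])      # H = (omega/2) sigma_z
v0 = np.array([1.0, 0.0, 0.0])           # spin along +x
print(evolve_expectations(h, v0, np.pi / omega))  # ~ (-1, 0, 0): half a turn
```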

    Tripartite to Bipartite Entanglement Transformations and Polynomial Identity Testing

    We consider the problem of deciding if a given three-party entangled pure state can be converted, with a non-zero success probability, into a given two-party pure state through local quantum operations and classical communication. We show that this question is equivalent to the well-known computational problem of deciding if a multivariate polynomial is identically zero. Efficient randomized algorithms developed to study the latter can thus be applied to the question of tripartite to bipartite entanglement transformations.
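
    The equivalence makes the standard randomized identity tests directly applicable. Below is a generic sketch of the Schwartz-Zippel test: a nonzero n-variate polynomial of total degree d vanishes at a uniformly random point of S^n with probability at most d/|S|. In the entanglement application the polynomial would be supplied as a black box built from the given states; the names here are illustrative.

```python
import random

def is_probably_zero(poly, n, degree, trials=20):
    # poly: black-box evaluation, poly(x1, ..., xn) -> number
    S = range(100 * degree)          # |S| >> degree makes each trial reliable
    for _ in range(trials):
        point = [random.choice(S) for _ in range(n)]
        if poly(*point) != 0:
            return False             # a certificate that poly is nonzero
    return True                      # identically zero with high probability

# example: (x+y)^2 - x^2 - 2xy - y^2 is identically zero
print(is_probably_zero(lambda x, y: (x + y)**2 - x**2 - 2*x*y - y**2,
                       n=2, degree=2))
```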

    A dichotomy for non-repeating queries with negation in probabilistic databases

    This paper shows that any non-repeating conjunctive relational query with negation has either polynomial-time or #P-hard data complexity on tuple-independent probabilistic databases. This result extends a dichotomy by Dalvi and Suciu for non-repeating conjunctive queries to queries with negation. The tractable queries with negation are precisely the hierarchical ones and can be recognised efficiently.
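
    The tractability criterion is syntactic and easy to check: a non-repeating query is hierarchical iff, for every pair of variables x and y, the sets of atoms containing them are disjoint or nested. A small sketch under a toy query representation (each atom given as the set of its variables; all names are illustrative):

```python
from itertools import combinations

def is_hierarchical(atoms):
    # at(v) = indices of the atoms in which variable v occurs
    vars_ = set().union(*atoms)
    at = {v: {i for i, a in enumerate(atoms) if v in a} for v in vars_}
    # hierarchical: every pair at(x), at(y) is nested or disjoint
    return all(at[x] <= at[y] or at[y] <= at[x] or not (at[x] & at[y])
               for x, y in combinations(vars_, 2))

# R(x), S(x, y), T(y) -- the classic non-hierarchical (hence #P-hard) pattern
print(is_hierarchical([{'x'}, {'x', 'y'}, {'y'}]))   # False
print(is_hierarchical([{'x'}, {'x', 'y'}]))          # True
```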

    Testing probability distributions underlying aggregated data

    In this paper, we analyze and study a hybrid model for testing and learning probability distributions. Here, in addition to samples, the testing algorithm is provided with one of two different types of oracles to the unknown distribution $D$ over $[n]$. More precisely, we define both the dual and cumulative dual access models, in which the algorithm $A$ can both sample from $D$ and, respectively, for any $i \in [n]$: query the probability mass $D(i)$ (query access), or get the total mass of $\{1,\dots,i\}$, i.e. $\sum_{j=1}^i D(j)$ (cumulative access). These two models, by generalizing the previously studied sampling and query oracle models, allow us to bypass the strong lower bounds established for a number of problems in these settings, while capturing several interesting aspects of these problems -- and providing new insight on the limitations of the models. Finally, we show that while the testing algorithms can in most cases be strictly more efficient, some tasks remain hard even with this additional power.
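
    For concreteness, a small sketch of the two oracle interfaces, wrapping a fully known distribution so that testing algorithms can be prototyped against them. The class names and the representation (a list of masses over $[n]$) are assumptions for illustration, not from the paper.

```python
import bisect
import random

class DualAccess:
    """Sampling from D plus probability-mass queries D(i)."""
    def __init__(self, D):
        self.D = D
        self.cum = []                 # running cumulative masses
        total = 0.0
        for p in D:
            total += p
            self.cum.append(total)

    def sample(self):
        # inverse-CDF sampling; domain elements are 1-indexed, as in [n]
        return bisect.bisect_left(self.cum, random.random()) + 1

    def query(self, i):
        return self.D[i - 1]

class CumulativeDualAccess(DualAccess):
    """Sampling from D plus cumulative-mass queries sum_{j<=i} D(j)."""
    def query(self, i):
        return self.cum[i - 1]

oracle = CumulativeDualAccess([0.1, 0.2, 0.3, 0.4])
print(oracle.sample(), oracle.query(2))   # a draw from D, and D(1)+D(2) = 0.3
```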